This is a simple game, and you state the preferences in the problem statement. Perfect rationality is pretty easy for this constrained case. But these are NOT sufficient to perfectly predict the outcome (and to know that the opponent has the same perfect prediction).
Such prediction is actually impossible. It's not a matter of being "sufficiently intelligent": a perfect simulation is recursive. It includes simulating your opponent's simulation of your simulation of your opponent's simulation of you, and so on without end. Actual mirroring or cross-causality (where your decision CANNOT diverge from your opponent's) requires full state duplication and prevention of identity divergence. This is not a hurdle that can be overcome in general.
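To make the regress concrete, here's a toy sketch (hypothetical agents and an illustrative payoff rule, just to show the structure): each agent decides by simulating the other, so neither call can ever bottom out.

```python
# Toy sketch: two hypothetical agents, each deciding by simulating the other.
# Neither call can bottom out, so "perfect prediction by simulation" never
# returns an answer.

def best_response(opponent_move):
    # Illustrative payoff logic; the specific rule doesn't matter here.
    return "cooperate" if opponent_move == "cooperate" else "defect"

def agent_a():
    # A's best move depends on what B will do...
    return best_response(agent_b())

def agent_b():
    # ...and B's best move depends on what A will do.
    return best_response(agent_a())

try:
    agent_a()
except RecursionError:
    print("mutual simulation never terminates")  # always reached
```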
This is similar to (or maybe identical to) Hofstadter's https://en.wikipedia.org/wiki/Superrationality, and is quite distinct from mere perfect rationality. It's a cute theory, but it fails in every imaginable situation where agential identity is anything like our current experience.
The agents are provided information about their perfect rationality and about mutual knowledge of each other's preferences. I showed a resolution to the recursive-simulation problem: the agents can avoid it by predisposing themselves.
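For concreteness, here's a toy sketch of one reading of "predisposing" (an assumption on my part: that it means committing in advance to a fixed policy that is common knowledge). Under that reading, prediction reduces to reading the opponent's known commitment, so the regress above never starts.

```python
# Sketch, assuming "predisposing" = precommitting to a fixed, commonly known
# policy that does not depend on simulating the opponent. Names are
# illustrative, not from the original discussion.

COMMITMENT_A = "cooperate"   # A's predisposition, common knowledge
COMMITMENT_B = "cooperate"   # B's predisposition, common knowledge

def predict(opponent_commitment):
    # With commitments common knowledge, "perfect prediction" is a lookup,
    # not a nested simulation.
    return opponent_commitment

a_prediction_of_b = predict(COMMITMENT_B)
b_prediction_of_a = predict(COMMITMENT_A)
print(a_prediction_of_b, b_prediction_of_a)  # cooperate cooperate
```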